  Fee-based full text   431 articles
  Free   21 articles
Electrical engineering   7 articles
Chemical industry   81 articles
Metalworking   7 articles
Machinery and instrumentation   17 articles
Building science   19 articles
Energy and power   11 articles
Light industry   10 articles
Petroleum and natural gas   2 articles
Radio electronics   81 articles
General industrial technology   84 articles
Metallurgical industry   35 articles
Atomic energy technology   2 articles
Automation technology   96 articles
  2023   7 articles
  2022   5 articles
  2021   12 articles
  2020   17 articles
  2019   15 articles
  2018   15 articles
  2017   14 articles
  2016   13 articles
  2015   10 articles
  2014   16 articles
  2013   35 articles
  2012   16 articles
  2011   23 articles
  2010   11 articles
  2009   25 articles
  2008   22 articles
  2007   8 articles
  2006   18 articles
  2005   13 articles
  2004   7 articles
  2003   14 articles
  2002   7 articles
  2001   8 articles
  2000   7 articles
  1999   6 articles
  1998   20 articles
  1997   10 articles
  1996   9 articles
  1995   14 articles
  1994   7 articles
  1993   6 articles
  1992   5 articles
  1991   8 articles
  1990   6 articles
  1989   5 articles
  1988   1 article
  1986   4 articles
  1985   4 articles
  1983   3 articles
  1982   2 articles
  1979   1 article
  1977   1 article
  1976   1 article
  1974   1 article
Sort order: 452 results in total (search time: 46 ms)
51.
Faults are expected to play an increasingly important role in how algorithms and applications are designed to run on future extreme-scale systems. Algorithm-based fault tolerance is a promising approach that involves modifying the algorithm to recover from faults with lower overheads than replicated storage and a significant reduction in lost work compared to checkpoint-restart techniques. Fault-tolerant linear algebra algorithms employ additional processors that store parities along the dimensions of a matrix to tolerate multiple, simultaneous faults. Existing approaches assume regular data distributions (blocked or block-cyclic) in which the failures of data blocks are independent. To match the characteristics of failures on parallel computers, we extend these approaches to the mapping of parity blocks in several important ways. First, we handle parity computation for generalized Cartesian data distributions, in which each processor holds arbitrary subsets of blocks of a Cartesian-distributed array. Second, we present techniques to handle correlated failures, i.e., multiple processors that can be expected to fail together. Third, we colocate parity blocks with the data blocks rather than requiring them to reside on additional processors. Several alternative approaches based on graph matching are presented that attempt to balance the memory overhead across processors while guaranteeing the same fault tolerance properties as existing approaches that assume independent failures on regular blocked data distributions. Evaluation of these algorithms demonstrates that the proposed approach provides these additional desirable properties with minimal overhead.
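To make the parity idea concrete, here is a minimal sketch of the row/column checksum scheme that underlies algorithm-based fault tolerance for matrices. It is illustrative only: the function names and the dense-matrix setting are our own, and it does not implement the paper's generalized Cartesian distributions, correlated-failure handling, or matching-based parity placement.

```python
import numpy as np

def add_row_column_checksums(a):
    """Append a parity row and parity column of sums to a matrix.

    Illustrates the basic checksum idea behind algorithm-based fault
    tolerance; the paper generalizes where such parity blocks are
    placed, which this sketch does not attempt."""
    checked = np.zeros((a.shape[0] + 1, a.shape[1] + 1))
    checked[:-1, :-1] = a
    checked[-1, :-1] = a.sum(axis=0)   # column parities
    checked[:-1, -1] = a.sum(axis=1)   # row parities
    checked[-1, -1] = a.sum()
    return checked

def recover_lost_entry(checked, i, j):
    """Reconstruct a single lost entry (i, j) from its row parity."""
    row = checked[i, :-1]
    return checked[i, -1] - (row.sum() - row[j])

a = np.arange(12, dtype=float).reshape(3, 4)
c = add_row_column_checksums(a)
lost = c[1, 2]
c[1, 2] = 0.0                          # simulate a failed block
assert np.isclose(recover_lost_entry(c, 1, 2), lost)
```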
52.
Complex parallel applications can often be modeled as directed acyclic graphs of coarse-grained application tasks with dependences. These applications exhibit both task and data parallelism, and combining the two (also called mixed parallelism) has been shown to be an effective model for their execution. In this paper, we present an algorithm to compute the appropriate mix of task and data parallelism required to minimize the parallel completion time (makespan) of these applications. In other words, our algorithm determines the set of tasks that should be run concurrently and the number of processors to be allocated to each task. The processor allocation and scheduling decisions are made in an integrated manner and are based on several factors, such as the structure of the task graph, the runtime estimates and scalability characteristics of the tasks, and the intertask data communication volumes. A locality-conscious scheduling strategy is used to improve intertask data reuse. Evaluation through simulations and actual executions of task graphs derived from real applications, as well as synthetic graphs, shows that our algorithm consistently generates schedules with a lower makespan than Critical Path Reduction (CPR) and Critical Path and Allocation (CPA), two previously proposed scheduling algorithms. Our algorithm also produces schedules with a lower makespan than pure task-parallel and pure data-parallel schedules. For task graphs with known optimal schedules or lower bounds on the makespan, our algorithm generates schedules that are closer to the optima than other scheduling approaches.
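As a rough illustration of trading task parallelism against data parallelism, the sketch below greedily assigns processors to concurrent tasks using an assumed Amdahl-style runtime model. The model, the task data, and the greedy rule are placeholders of our own; the paper's algorithm integrates allocation with scheduling over the full task graph and also accounts for communication volumes and data reuse, which this sketch omits.

```python
def runtime(task, procs):
    """Amdahl-style estimate: a serial fraction limits data-parallel
    speedup. A stand-in for the per-task runtime/scalability models."""
    serial, work = task
    return serial * work + (1.0 - serial) * work / procs

def allocate(tasks, total_procs):
    """Give one more processor to whichever concurrent task gains the
    most, until all processors are assigned; a simplified stand-in for
    an integrated allocation-and-scheduling step."""
    alloc = {name: 1 for name in tasks}
    for _ in range(total_procs - len(tasks)):
        def gain(name):
            t = tasks[name]
            return runtime(t, alloc[name]) - runtime(t, alloc[name] + 1)
        best = max(alloc, key=gain)
        alloc[best] += 1
    return alloc

# Two concurrent tasks: (serial fraction, total work)
tasks = {"fft": (0.05, 100.0), "solve": (0.30, 60.0)}
print(allocate(tasks, total_procs=16))
```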
53.
This paper presents a formal specification and a proof of correctness for the widely used Force-Directed List Scheduling (FDLS) algorithm for resource-constrained scheduling of data flow graphs in high-level synthesis systems. The proof effort was conducted using a higher-order logic theorem prover. During the proof effort, many interesting properties of the FDLS algorithm were discovered. These properties are formally stated and proved in a higher-order logic theorem proving environment. They constitute a detailed set of formal assertions and invariants that should hold at various steps of the FDLS algorithm. They are then inserted as programming assertions in the implementation of the FDLS algorithm in a production-strength high-level synthesis system. When turned on, the programming assertions (1) certify whether a specific run of the FDLS algorithm produced correct schedules and (2), in the event of failure, help discover and isolate programming errors in the FDLS implementation. We present a detailed example and several experiments to demonstrate the effectiveness of these assertions in discovering and isolating errors. Based on this experience, we discuss the role of the formal theorem-proving exercise in developing a useful set of assertions for embedding in the scheduler code, and argue that in the absence of such a formal proof-checking effort, discovering such a useful set of assertions would have been an arduous if not impossible task.
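The sketch below illustrates the general idea of embedding invariants as programming assertions in a scheduler. It uses a toy resource-constrained list scheduler (plain, not force-directed) with two illustrative invariants; the actual assertions in the paper are derived from the formal proof of FDLS and are not reproduced here.

```python
def list_schedule(ops, latency, resources):
    """Toy resource-constrained list scheduler with embedded assertions.

    The asserted invariants (resource bound respected; every predecessor
    finishes before its successor starts) are illustrative examples of
    the kind of formally derived checks described in the abstract."""
    start = {}
    ready = [op for op in ops if not ops[op]]      # ops: op -> set of preds
    cycle = 0
    while len(start) < len(ops):
        issued = 0
        for op in list(ready):
            if issued < resources:
                start[op] = cycle
                ready.remove(op)
                issued += 1
        assert issued <= resources, "resource constraint violated"
        cycle += 1
        for op, preds in ops.items():
            if op not in start and op not in ready and \
               all(p in start and start[p] + latency <= cycle for p in preds):
                ready.append(op)
    for op, preds in ops.items():                  # schedule-wide invariant
        assert all(start[p] + latency <= start[op] for p in preds)
    return start

deps = {"a": set(), "b": set(), "c": {"a", "b"}}
print(list_schedule(deps, latency=1, resources=1))
```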
54.
Three-dimensional models of objects, and the process of creating them, are central to a variety of applications in Augmented Reality. In this article, we present a system designed for in-situ modeling that uses interactive techniques on two generic types of camera-equipped handheld devices. The system allows online construction of 3D wireframe models through a combination of user interaction and automated methods. In particular, we concentrate on a rigorous evaluation of the two devices and interaction methods in the context of 3D feature selection. We present the key components of our system, discuss our findings and results, and identify design recommendations.
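One generic computation involved in selecting 3D features from a handheld device's screen is back-projecting a 2D tap into a viewing ray and picking the nearest model vertex. The sketch below shows that step under assumptions of our own (a pinhole intrinsics matrix K, vertices given in the camera frame, an angular tolerance); it is not the paper's interaction technique.

```python
import numpy as np

def tap_to_ray(u, v, K):
    """Back-project a 2D tap (u, v) into a unit viewing ray in camera
    coordinates, given pinhole intrinsics K."""
    d = np.linalg.inv(K) @ np.array([u, v, 1.0])
    return d / np.linalg.norm(d)

def pick_vertex(u, v, K, vertices, max_angle_deg=5.0):
    """Select the wireframe vertex whose direction lies closest to the
    tap ray, if it falls within an angular tolerance."""
    ray = tap_to_ray(u, v, K)
    dirs = vertices / np.linalg.norm(vertices, axis=1, keepdims=True)
    angles = np.degrees(np.arccos(np.clip(dirs @ ray, -1.0, 1.0)))
    best = int(np.argmin(angles))
    return best if angles[best] <= max_angle_deg else None

K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
model = np.array([[0.1, 0.0, 2.0], [0.5, 0.2, 3.0]])  # points in camera frame
print(pick_vertex(330.0, 245.0, K, model))
```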
55.
A priori design of catalysts is not yet possible. Such a task would demand currently unavailable scientific knowledge of the correlations between synthesis parameters and the resulting solid-state and surface structures, on the one hand, and between those atomic-level structural details and their catalytic functions, on the other. To avoid testing every possible combination, therefore, the applied chemist or chemical engineer must identify empirical correlations underlying the existing experimental data base.

The ability of artificial neural networks to identify complex correlations and to predict the results of experiments has recently generated considerable interest in various areas of science and engineering. In this paper, neural networks are used to identify composition-performance relationships in automobile exhaust catalysts.

This work employs an artificial neural network technique to perform a sensitivity analysis of the conversions of pollutant gases as a function of catalyst composition and operating conditions. The approach converges on the optimum catalyst composition and operating condition required to produce specified conversions of carbon monoxide, hydrocarbons, and nitrogen oxides to carbon dioxide, water, and dinitrogen, respectively.
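A minimal sketch of this kind of network-based sensitivity analysis, using scikit-learn's MLPRegressor on synthetic data. The input variables (noble-metal loadings, space velocity, temperature), the toy response surfaces, and the perturbation step are all hypothetical stand-ins for the paper's catalyst data.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical training data: each row is (Pt loading, Rh loading,
# space velocity, temperature); targets are CO/HC/NOx conversions.
# Real composition-performance data would come from catalyst tests.
rng = np.random.default_rng(0)
X = rng.uniform([0.0, 0.0, 1.0, 300.0], [2.0, 0.5, 5.0, 600.0], (200, 4))
y = np.column_stack([                      # invented response surfaces
    1.0 / (1.0 + np.exp(-(X[:, 0] + 0.01 * (X[:, 3] - 400)))),
    1.0 / (1.0 + np.exp(-(0.5 * X[:, 0] + 0.02 * (X[:, 3] - 450)))),
    1.0 / (1.0 + np.exp(-(3.0 * X[:, 1] - 0.5 * X[:, 2]))),
])

net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
net.fit(X, y)

# Sensitivity analysis: perturb one input at a time around a base point.
base = np.array([[1.0, 0.25, 3.0, 450.0]])
for k, name in enumerate(["Pt", "Rh", "SV", "T"]):
    bumped = base.copy()
    bumped[0, k] *= 1.05
    print(name, net.predict(bumped) - net.predict(base))
```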
56.
This paper presents a graphical technique to locate the center of curvature of the path traced by a coupler point of a planar, single-degree-of-freedom, geared seven-bar mechanism. Because this is an indeterminate mechanism, the pole for the instantaneous motion of the coupler link, i.e., the point coincident with the instantaneous center of zero velocity for this link, cannot be obtained from the Aronhold–Kennedy theorem. The graphical technique presented in the first part of the paper to locate this pole is believed to be an important contribution to the kinematics literature. The paper then focuses on the graphical technique to locate the center of curvature of the path traced by an arbitrary coupler point. The technique begins by replacing the seven-bar mechanism with a constrained five-bar linkage whose links are kinematically equivalent up to the second-order properties of motion. Three kinematic inversions are then investigated, and a four-bar linkage is obtained from each inversion. The motion of the coupler link of the final four-bar linkage is equivalent, up to and including the second-order properties of motion, to that of the coupler of the geared seven-bar. The center of curvature of the path traced by an arbitrary coupler point can then be obtained from existing techniques, such as the Euler–Savary equation. An analytical method, referred to as the method of kinematic coefficients, is presented as an independent check of the graphical technique.
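For reference, one common form of the Euler–Savary equation mentioned above relates a point C, measured from the velocity pole P along a ray at angle ψ to the pole tangent, to the center of curvature C′ of its path; sign conventions for the directed distances vary between texts.

```latex
% Euler--Savary relation: \delta is the diameter of the inflection
% circle; PC and PC' are directed distances from the pole P along the
% ray through C. Sign conventions vary between texts.
\[
  \left( \frac{1}{PC} - \frac{1}{PC'} \right)\sin\psi \;=\; \frac{1}{\delta}
\]
```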
57.
This paper presents an on-line learning, failure-tolerant neural controller capable of controlling buildings subjected to severe earthquake ground motions. In the proposed scheme, the neural controller aids a conventional H∞ controller designed to reduce the response of buildings under earthquake excitations. The conventional H∞ controller is designed to reduce the structural responses for a suite of severe earthquake excitations using specially designed frequency-domain weighting filters. The neural controller uses a sequential-learning radial basis function neural network architecture called the extended minimal resource allocating network. The parameters of the neural network are adapted on-line, with no off-line training. The performance of the proposed neural-aided controller is illustrated through simulation studies of a two-degree-of-freedom structure equipped with one actuator on each floor. Results are presented for the cases of no failure and of failure of the actuator on each of the two floors, under several earthquake excitations. The study indicates that the performance of the proposed neural-aided controller is superior to that of the H∞ controller under no-failure conditions. In the presence of actuator failures, the performance of the primary H∞ controller degrades considerably, since actuator failures were not considered in its design. Under these circumstances, the neural-aided controller is capable of controlling the acceleration and displacement structural responses. In many cases, using the neural-aided controller, the response magnitudes under failure conditions are comparable to the performance of the H∞ controller under no-failure conditions.
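The following is a much-simplified sketch of the on-line resource-allocating idea behind such a neural controller: a Gaussian RBF network that adds a unit when a sample is novel and otherwise takes a gradient (LMS) step. The thresholds, basis width, and learning rate are invented; the paper's extended minimal resource allocating network (EMRAN) uses additional growth, pruning, and update criteria not shown here.

```python
import numpy as np

class OnlineRBF:
    """Gaussian RBF network adapted sample-by-sample: allocate a unit
    when a sample is novel, otherwise nudge weights by one LMS step.
    A simplified sketch of the resource-allocating idea, not EMRAN."""

    def __init__(self, novelty_dist=0.5, novelty_err=0.1, lr=0.05, width=0.4):
        self.centers, self.weights = [], []
        self.d0, self.e0, self.lr, self.width = novelty_dist, novelty_err, lr, width

    def _phi(self, x):
        return np.array([np.exp(-np.sum((x - c) ** 2) / (2 * self.width ** 2))
                         for c in self.centers])

    def predict(self, x):
        return float(self._phi(x) @ np.array(self.weights)) if self.centers else 0.0

    def update(self, x, y):
        err = y - self.predict(x)
        near = min((np.linalg.norm(x - c) for c in self.centers), default=np.inf)
        if abs(err) > self.e0 and near > self.d0:   # novel: allocate a unit
            self.centers.append(np.array(x, dtype=float))
            self.weights.append(err)
        elif self.centers:                          # familiar: LMS step
            phi = self._phi(x)
            self.weights = list(np.array(self.weights) + self.lr * err * phi)

net = OnlineRBF()
for t in np.linspace(0, 2 * np.pi, 200):            # learn sin(t) on-line
    net.update(np.array([t]), np.sin(t))
print(round(net.predict(np.array([1.0])), 3))
```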
58.
Response surface methodology (RSM) based on a D-optimal design was employed to investigate the tribological characteristics of journal bearing materials such as brass, bronze, and copper lubricated by a biolubricant, chemically modified rapeseed oil (CMRO). The wear and friction performance were observed for the bearing materials tested with TiO2, WS2, and CuO nanoadditives dispersed in the CMRO. The tests were performed by selecting sliding speed and load as numerical factors and the nano-based biolubricant/bearing material combination as the categorical factor, to evaluate tribological characteristics such as the coefficient of friction (COF) and the specific wear rate. The results showed that RSM based on a D-optimal design was instrumental in the selection of suitable journal bearing materials for a typical system, especially one lubricated by a nano-based biolubricant. At a sliding speed of 2.0 m/s and a load of 100 N, the bronze bearing material with CMRO containing CuO nanoparticles had the lowest COF and wear rate. In addition, scanning electron microscopy (SEM) examination of the worn bearing surfaces showed that the surface of the bronze bearing lubricated with CMRO containing the CuO nanoadditive is smoother than that of the copper and brass bearing materials.
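To illustrate the response-surface step, the sketch below fits a full second-order model of the friction coefficient in the two numerical factors (sliding speed and load) by least squares, then evaluates it at the operating point named above. The measurements are invented placeholders, and the D-optimal design construction and the categorical lubricant/material factor are omitted.

```python
import numpy as np

def quadratic_features(speed, load):
    """Second-order RSM model terms: 1, s, l, s*l, s^2, l^2."""
    return np.column_stack([np.ones_like(speed), speed, load,
                            speed * load, speed ** 2, load ** 2])

# Hypothetical COF measurements over sliding speeds (m/s) and loads (N);
# real data would come from the D-optimal design runs.
speed = np.array([0.5, 0.5, 1.0, 1.0, 1.5, 1.5, 2.0, 2.0, 1.25])
load = np.array([50., 100., 50., 100., 50., 100., 50., 100., 75.])
cof = np.array([.09, .085, .08, .07, .075, .066, .073, .062, .068])

beta, *_ = np.linalg.lstsq(quadratic_features(speed, load), cof, rcond=None)
print("fitted COF at 2.0 m/s, 100 N:",
      quadratic_features(np.array([2.0]), np.array([100.0])) @ beta)
```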
59.
Stress corrosion cracking of a commercial 0.19 pct C steel (SA-516 Grade 70) was studied in hot (92 °C) caustic solutions of NaOH and NaOH plus aluminate (AlO2-) species. Potentiostatically controlled tests were conducted near the active-passive transition, using fracture mechanics testing techniques and fatigue-precracked double cantilever beam specimens. Crack propagation rates (v) were determined for a range of stress intensities (K_I). In simple NaOH solutions, Region I (K_I-dependent) and Region II (K_I-independent) cracking behavior were observed. Increasing the concentration of NaOH from 2 m to 8 m decreased K_ISCC and displaced Region I and the onset of Region II to lower K_I levels. The presence of AlO2- produced a comparable effect, with Region II being extended to lower K_I-v levels relative to simple NaOH solutions of similar hydroxyl anion concentration. The overall K_I-v behavior and fractography were consistent with a dissolution mechanism of crack advance based on the general principles of the film rupture-dissolution model. The effect of environment composition upon K_I-v behavior was attributed to changes in repassivation kinetics.
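The two-region crack-velocity behavior described above is often summarized by a piecewise empirical law of the following schematic form (our notation; the study reports measured curves, and A, n, K*, and v_II are not fitted values from it):

```latex
% Schematic two-region representation of a K_I-v curve: power-law
% growth in Region I, a K_I-independent plateau in Region II.
\[
  v(K_I) =
  \begin{cases}
    A\, K_I^{\,n} & \text{for } K_{ISCC} \le K_I < K^{*} \quad \text{(Region I)},\\
    v_{II} & \text{for } K_I \ge K^{*} \quad \text{(Region II plateau)}.
  \end{cases}
\]
```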
60.
Multifactorial diseases, which include the common congenital abnormalities (incidence: 6%) and chronic diseases with onset predominantly in adults (population prevalence: 65%), contribute substantially to human morbidity and mortality. Their transmission patterns do not conform to Mendelian expectations. The model most frequently used to explain their inheritance and to estimate risks to relatives is a Multifactorial Threshold Model (MTM) of disease liability. The MTM assumes that: (i) the disease is due to the joint action of a large number of genetic and environmental factors, each contributing a small amount of liability; (ii) the distribution of liability in the population is Gaussian; and (iii) individuals whose liability exceeds a certain threshold value are affected by the disease. For most of these diseases, the genes involved and the environmental factors are not fully known. In the context of radiation exposures of the population, the question of the extent to which induced mutations will increase the frequencies of these diseases has remained unanswered. In this paper, we address this problem by using a modified version of the MTM which incorporates mutation and selection as two additional parameters. The model assumes a finite number of gene loci and a threshold of liability (hence the designation Finite-Locus Threshold Model, or FLTM). The FLTM permits one to examine the relationship between the broad-sense heritability of disease liability and the mutation component (MC), the responsiveness of the disease to a change in mutation rate. Through the use of a computer program (in which mutation rate, selection, threshold, recombination rate, and environmental variance are input parameters, and MC and heritability of liability are output estimates), we studied the MC-heritability relationship for (i) a permanent increase in mutation rate (e.g., when the population sustains radiation exposure in every generation) and (ii) a one-time increase in mutation rate. Our investigation shows that, for a permanent 15% increase in mutation rate, the MC in the first few generations is of the order of 1-2%. This conclusion holds over a broad range of heritability values above about 30%. At equilibrium, however, the MC reaches 100%. For a one-time increase in mutation rate, the MC reaches its maximum value (of 1-2%) in the first generation, followed by a decline to zero in subsequent generations. These conclusions hold for so many combinations of parameter values (i.e., threshold, selection coefficient, number of loci, environmental variance, spontaneous mutation rate, increases in mutation rate, levels of 'interaction' between genes, and recombination rates) that the result can be considered relatively robust. We also investigated the biological validity of the FLTM in terms of the minimum number of loci, their mutation rates, and the selection coefficients needed to explain the incidence of multifactorial diseases, using the theory of genetic loads. We argue that for common multifactorial diseases, selection coefficients are small in present-day human populations. Consequently, with mutation rates of the order known for Mendelian genes, the FLTM with a few loci and weak selection provides a good approximation for studying the responsiveness of multifactorial diseases to radiation exposures.
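In symbols, assumptions (i)-(iii) amount to the standard liability-threshold relation, and the mutation component is conventionally defined as the relative change in incidence per relative change in mutation rate (standard formulations, not the paper's full finite-locus machinery):

```latex
% Liability x is normally distributed; disease incidence p is the tail
% area beyond the threshold T (\Phi is the standard normal CDF):
\[
  x \sim \mathcal{N}(\mu, \sigma^2), \qquad
  p = \Pr(x > T) = 1 - \Phi\!\left(\frac{T - \mu}{\sigma}\right).
\]
% The mutation component measures the responsiveness of incidence p
% to a change in the mutation rate m:
\[
  \mathrm{MC} = \frac{\Delta p / p}{\Delta m / m}.
\]
```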